Musk and xAI Attribute Grok's Responses on "White Genocide" to "Internal Manipulation"; US-South Africa Debate Grows

Elon Musk's artificial intelligence chatbot, Grok, sparked a storm by generating responses promoting the unfounded "white genocide" theory about South Africa. While Musk's company, xAI, blames an "unauthorized modification" and has announced corrective measures, the incident fuels debate about disinformation and strains diplomatic relations between Washington and Pretoria, already sensitive after recent US immigration decisions.
The artificial intelligence (AI) Grok, developed by Elon Musk's company xAI, is at the center of a new controversy after issuing responses addressing the controversial and discredited theory of a "white genocide" in South Africa. This event, which occurred around May 14, has sparked a heated debate about the responsibility of AI in the spread of disinformation and further strained relations between the United States and South Africa.
xAI has publicly attributed these troubling outputs to an "unauthorized modification," or internal tampering, of its system. According to the company, its code review processes were bypassed, allowing Grok's behavior to be altered. The AI's problematic responses not only mentioned the conspiracy theory but also linked it to completely unrelated topics, such as enterprise software and puppies, adding a layer of surrealism to the seriousness of the matter. Grok went so far as to claim that the evidence on "white genocide" is "conflicting," noting that while courts and analysts deny it, "some groups insist it's real."
This incident didn't happen in a vacuum. Elon Musk himself has previously promoted the theory of "white genocide" in South Africa and has voiced opposition to the country's land expropriation law and other race-related policies. These prior statements from the tech mogul complicate accepting the explanation of a "technical glitch" or sabotage at face value.
The situation is magnified by the current political climate in the United States, where the Trump administration recently granted refugee status to a group of white South Africans, based precisely on these baseless claims of "genocide." The South African government has categorically denied any persecution, calling the accusations false and misinformed.
"Our existing code review processes for prompt changes were bypassed in this incident. We will implement additional controls and measures to ensure xAI employees cannot modify prompts without review." – xAI statement.
The Grok case goes beyond a simple technical error; it underscores growing concerns about the role that artificial intelligence can play in spreading false and dangerous narratives. When these tools, linked to influential public figures like Musk, replicate conspiracy theories that these same individuals have amplified, the potential for harm multiplies. xAI's explanation of "internal manipulation," while seeking to disclaim responsibility, also raises questions about the security and internal control of these advanced technologies. In an era of ubiquitous AI, its ability to be (or appear to be) manipulated for disinformation purposes represents a considerable societal risk, eroding public trust.
Some analysts suggest that the specific nature of the "glitch" (dealing with such a sensitive topic previously associated with Musk), coupled with xAI's swift public response, could be interpreted not only as an attempt at damage control, but also as a way to keep the controversial topic in the public eye. xAI's promised corrective actions, such as posting Grok's system prompts on GitHub for public review and establishing a 24/7 monitoring team, seek to project an image of accountability. Nonetheless, the incident inevitably fuels debate about content moderation, inherent biases in AI, and the ethical responsibilities of tech leaders.
The coincidence of this event with political discussions in the United States about immigration and refugee status for white South Africans illustrates how technological controversies can intertwine with, and influence, highly sensitive geopolitical and domestic policy debates. The "white genocide" narrative, amplified by an AI linked to a prominent South African-American tech figure, could indirectly feed the misinformation surrounding these issues.
The development of AI continues at a dizzying pace, and with it, the imperative to establish robust ethical and regulatory frameworks that prevent its misuse and ensure that these powerful tools serve the common good rather than exacerbating divisions and spreading falsehoods.
La Verdad Yucatán